Anthropic Sues US Government Over 'National Security Risk' Designation
Artificial intelligence firm Anthropic has filed a lawsuit against the US government following its classification as a national security threat. The designation, which labels Anthropic as a supply chain risk, effectively bars federal contractors from using its Claude AI system in sensitive applications. The company simultaneously filed legal challenges in district court and the D.C. Circuit, alleging constitutional violations and regulatory overreach.
The company contends that the government's actions amount to an unlawful restriction on protected speech imposed by administrative fiat. "The Constitution doesn't permit using procurement policy as a weapon against disfavored viewpoints," Anthropic stated in court filings. While maintaining its willingness to address legitimate security concerns, the AI developer seeks injunctive relief to prevent what it characterizes as politically motivated blacklisting.